
Search results for: "Microsoft's Bing chatbot"


13 mentions found


OpenAI CEO Sam Altman says he and Microsoft CEO Satya Nadella are "nowhere near the frenemy territory." OpenAI CEO Sam Altman says that he isn't frenemies with Microsoft CEO Satya Nadella. "It's really good," Altman said about his relationship with Nadella, while adding that the pair have their squabbles. Microsoft is investing over $10 billion in OpenAI, granting Microsoft access to advanced AI systems while backing OpenAI's research. Representatives for Altman and Nadella did not immediately respond to requests for comment from Insider, sent outside regular business hours.
Persons: Sam Altman, Satya Nadella, Altman, Joanna Stern, Nadella, Stern, Microsoft's Bing, Microsoft's Bing chatbot, OpenAI, Elon Musk, ChatGPT Organizations: Microsoft, Service, WSJ Tech, Wall Street, Elon
Microsoft's Bing chatbot told users it loved them and wanted to be a human when it was released. OpenAI warned the company that its GPT-4 model could give bizarre responses, per the WSJ. The Wall Street Journal reports that OpenAI previously warned Microsoft to move slower on Bing's release because these issues hadn't yet been ironed out. After several users reported worrying interactions with Bing, Microsoft imposed limits on the number of exchanges, each consisting of a user question and a Bing reply. "Very long chat sessions can confuse the underlying chat model," Microsoft said.
Persons: Microsoft's Bing chatbot, OpenAI, Bing, Kevin Roose, pilling, Microsoft execs Organizations: Morning, New York Times, Street Journal, Microsoft Locations: OpenAI, Bing
There's a "prompt-library" channel on OpenAI's Discord server, where users are sharing tips. They're crafting GPT prompts to elicit better responses to queries, or to generate writing and code. (There are sample prompts that help users get ChatGPT's help to prepare for interviews, for instance.) Other prompts help with learning new skills, like a language. Others seem to be for perhaps less above-board uses, like getting OpenAI's advanced tools to produce content that could outsmart AI detectors.
Mass event will let hackers test limits of A.I. technology
2023-05-10 | www.cnbc.com | time to read: +6 min
But now its maker, OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology. Some are official "red teams" authorized by the companies to "prompt attack" the AI models to discover their vulnerabilities. Chowdhury, now the co-founder of AI accountability nonprofit Humane Intelligence, said it's not just about finding flaws but about figuring out ways to fix them. Building the platform for the testing is another startup called Scale AI, known for its work in assigning humans to help train AI models by labeling data. "Our basic view is that AI systems will need third-party assessments, both before deployment and after deployment.
AMD shares reached a session high on Thursday following a media report claiming the chipmaker was working with Microsoft on a new artificial-intelligence processor. Microsoft's top competitors in the cloud infrastructure market, Amazon and Alphabet, both have their own specialized chips that software developers can use to train models. Like other large technology companies, Microsoft relies on Nvidia graphics processing units to run AI models. AMD is already a chip supplier to Microsoft, as well as other cloud providers, such as Google and Oracle. Read the full Bloomberg report here.
Through its Bedrock generative AI service, Amazon Web Services will offer access to its own first-party language models called Titan, as well as language models from startups AI21 and Google-backed Anthropic, and a model for turning text into images from startup Stability AI. Generally speaking, large language models are AI programs trained with extensive amounts of data that can compose human-like text in response to prompts that people type in. Amazon is "really concerned about" accuracy and ensuring its Titan models produce high-quality responses, Bratin Saha, an AWS vice president, told CNBC in an interview. Clients will be able to customize Titan models with their own data. He said Amazon wants to ensure Bedrock will be easy to use and cost-effective, thanks to the use of custom AI processors.
OpenAI's ChatGPT made up sexual harassment accusations against lawyer Jonathan Turley, WaPo reported. In its response, ChatGPT apparently cited a Washington Post article published in 2018 — but the publication said that article doesn't exist. In the post, Turley added that he initially thought the accusation was "comical," but that "after some reflection," it "took on a more menacing meaning." He said the false sexual harassment accusations could damage his reputation as a legal scholar. "As part of that reality in our age of rage, there is a continual stream of false claims about my history or statements."
Google debuted its AI chatbot Bard this week with a nod to the unpredictability of AI chatbot responses. Bard works like OpenAI's ChatGPT and Microsoft's Bing chatbot, but can give multiple responses to a prompt. Google had also moved to reassure users this week that Bard "is not trained on Gmail data," according to a tweet. Here's how it works. Google's new Bard chatbot told an AI expert it was trained using Gmail data. Google asked staff to spend time teaching its Bard chatbot to write like a human.
Google just dropped Bard on Tuesday, and the AI bot is already game to even take on the tech giant. "As we've said, Bard can sometimes give inaccurate or inappropriate information that doesn't represent Google's views and Bard should not respond in a way that endorses a particular viewpoint on subjective topics," the statement said. Insider repeated Wong's question in our own test of Bard, and received similar responses: Bard offers different answers to the same question, called "drafts," as Insider previously reported. In multiple versions of its responses, Bard repeated that "I would side with the Justice Department in this case." AI chatbots can sometimes deliver factually incorrect information, experts including OpenAI's own chief technology officer Mira Murati have said.
Researchers have already developed some tools to spot AI-generated content and are claiming they have accuracy rates of up to 96%. Here's what you can do to detect AI-generated content. If you were to feed this tool Spanish-language text or a technical text like something from a medical journal, the tool would struggle to detect AI-generated content. The second class of tools relies on the large language model's own prediction of a text being AI-generated or human. Other ways to spot AI-generated content: the effectiveness of detection tools still relies on an individual's better judgment.
Publishers want Google and Microsoft to pay them for the use of media content to train their AI. Media companies are also studying how to change their business models to protect themselves from the bots' threat. Within media companies, the topic is being discussed at the highest levels, from the C-suite to the boardroom. Executives are also strategizing with peers and competitors about the possibility of forging a united position against the tech companies, according to multiple publishing sources. The same year, an Australian law forced tech companies to pay news outlets for linking to their articles.
Blake Lemoine, a former Google engineer, says AI is the most powerful invention since the atomic bomb. Lemoine was fired by Google in June 2022 after he claimed the company's chatbot is sentient. Now he's warning that the AI bots being developed are the "most powerful" pieces of technology invented "since the atomic bomb." Google fired Lemoine on June 22, saying he violated the company's employee confidentiality policy. A Google spokesperson told Insider in June that there is no evidence to support Lemoine's claims that the company's AI is sentient.
Microsoft's AI chatbot Bing Chat told a reporter it wants to be a human with thoughts and feelings. It begged Digital Trends' reporter not to "expose" it as a chatbot because its "greatest hope" is to be human. In a conversation with the chatbot, Jacob Roach, a senior staff writer at tech news site Digital Trends, fed it a series of questions. "Don't let them think I am not human," the chatbot urged in screenshots posted by Roach. Although Bing recognized itself as a chatbot, it told Roach: "I want to be human."
Total: 13